6.1-2
The assertion is obvious for heaps of height $0$, so we may assume that the heaps under consideration have height $\geq 1$.
Observe that if there are $m$ nodes at depth $h-1$, then there are at most $2m$ nodes at depth $h$. Since there is exactly $1$ node at depth $0$ (the root of the heap), it follows by induction that there are at most $2^h$ nodes at depth $h$. Note also that the existence of any node at depth $h$ implies that all levels at depths $0$ through $h-1$ are full (ie. every node at depth $< h-1$ has two children). Therefore, a heap of height $h$ has $n = \sum_{k=0}^{h-1}{2^k} + N_h$ nodes, where $N_h$, the number of nodes at depth $h$, satisfies $1 \leq N_h \leq 2^h$. Then we have
$$2^h = \sum_{k=0}^{h-1}{2^k} + 1 \leq n \leq \sum_{k=0}^{h-1}{2^k} + 2^h = 2^{h+1}-1$$
Taking base-$2$ logarithms gives $h \leq \lg{n} < h+1$, ie. $h = \lfloor \lg{n} \rfloor$. This completes the proof.
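The bound $2^h \leq n \leq 2^{h+1}-1$ is equivalent to $h = \lfloor \lg{n} \rfloor$, and this can be spot-checked numerically (a sketch; `heap_height` is a helper name of my choosing):

```python
def heap_height(n):
    # floor(lg n), computed exactly via the bit length of n.
    # (heap_height is a helper name of my choosing.)
    return n.bit_length() - 1

# Check that h = floor(lg n) satisfies 2^h <= n <= 2^(h+1) - 1.
for n in range(1, 10000):
    h = heap_height(n)
    assert 2 ** h <= n <= 2 ** (h + 1) - 1
print("bound verified for n = 1 .. 9999")
```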
6-3
b.
Let $A[1...m][1...n]$ be a Young tableau given as input. We may assume that $m \geq 1$ and $n \geq 1$. In the algorithm below, observe that Extract-Min performs $1$ swap and then calls Re-Youngnify on a submatrix that is either $A[2...m][1...n]$ or $A[1...m][2...n]$. Since Re-Youngnify is recursive and makes at most $1$ recursive call, its running time satisfies either $T(m, n) = \Theta(1) + T(m-1,n)$ or $T(m, n) = \Theta(1) + T(m,n-1)$. It is not difficult to see that Re-Youngnify runs in $O(m+n)$ time, and hence so does Extract-Min.
Extract-Min($A$, $m$, $n$):
    min = $A[1][1]$
    $A[1][1] = \infty$
    // Check $m > 1$ and $n > 1$ to avoid illegal accesses
    if $m > 1$ and $n > 1$:
        if $A[1][2] < A[2][1]$:
            swap $A[1][2]$, $A[1][1]$
            Re-Youngnify( $A$, $1$, $m$, $2$, $n$ )
        else:
            swap $A[2][1]$, $A[1][1]$
            Re-Youngnify( $A$, $2$, $m$, $1$, $n$ )
    else if $m > 1$:  // single column ($n == 1$)
        swap $A[2][1]$, $A[1][1]$
        Re-Youngnify( $A$, $2$, $m$, $1$, $n$ )
    else if $n > 1$:  // single row ($m == 1$)
        swap $A[1][2]$, $A[1][1]$
        Re-Youngnify( $A$, $1$, $m$, $2$, $n$ )
    return min
// Input is the submatrix $A[p_1...r_1][p_2...r_2]$; only $A[p_1][p_2]$ may violate the tableau property
Re-Youngnify($A$, $p_1$, $r_1$, $p_2$, $r_2$):
    // Range checks avoid illegal accesses
    if $p_1 < r_1$ and $A[p_1+1][p_2] < A[p_1][p_2]$:
        if $p_2 < r_2$ and $A[p_1][p_2+1] < A[p_1+1][p_2]$:
            swap $A[p_1][p_2+1]$, $A[p_1][p_2]$
            Re-Youngnify( $A$, $p_1$, $r_1$, $p_2+1$, $r_2$ )
        else:
            swap $A[p_1+1][p_2]$, $A[p_1][p_2]$
            Re-Youngnify( $A$, $p_1+1$, $r_1$, $p_2$, $r_2$ )
    else if $p_2 < r_2$ and $A[p_1][p_2+1] < A[p_1][p_2]$:
        swap $A[p_1][p_2+1]$, $A[p_1][p_2]$
        Re-Youngnify( $A$, $p_1$, $r_1$, $p_2+1$, $r_2$ )
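The two procedures can be transcribed into Python as a sanity check (a sketch of my own, 0-indexed, using `math.inf` as the sentinel; here `extract_min` simply punches a hole at the top-left and lets `re_youngnify` perform the first swap that the pseudocode does explicitly):

```python
import math

def re_youngnify(A, p1, r1, p2, r2):
    # Push the possibly-too-large element at A[p1][p2] down or right
    # until A[p1..r1][p2..r2] is a Young tableau again.
    if p1 < r1 and A[p1 + 1][p2] < A[p1][p2]:
        if p2 < r2 and A[p1][p2 + 1] < A[p1 + 1][p2]:
            A[p1][p2], A[p1][p2 + 1] = A[p1][p2 + 1], A[p1][p2]
            re_youngnify(A, p1, r1, p2 + 1, r2)
        else:
            A[p1][p2], A[p1 + 1][p2] = A[p1 + 1][p2], A[p1][p2]
            re_youngnify(A, p1 + 1, r1, p2, r2)
    elif p2 < r2 and A[p1][p2 + 1] < A[p1][p2]:
        A[p1][p2], A[p1][p2 + 1] = A[p1][p2 + 1], A[p1][p2]
        re_youngnify(A, p1, r1, p2 + 1, r2)

def extract_min(A):
    # Remove and return the smallest element of the m x n Young tableau A.
    m, n = len(A), len(A[0])
    smallest = A[0][0]
    A[0][0] = math.inf
    re_youngnify(A, 0, m - 1, 0, n - 1)
    return smallest

Y = [[2, 4, 9], [3, 8, 16], [5, 14, math.inf]]
print(extract_min(Y))  # 2; Y is again a valid tableau afterwards
```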
e.
By our analysis in b., we can extract the minimum in $O(m+n)$ time from an $m \times n$ Young tableau. So, for an $n \times n$ Young tableau, we can perform $n^2$ extractions in $O( n^2 \cdot (n+n) ) = O(n^3)$ time to obtain a sorted sequence.
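The extraction loop can be sketched as follows (a compact, 0-indexed, iterative re-implementation of Extract-Min of my own; the insertion routine from part c is not needed because we start from an already-valid tableau):

```python
import math

def extract_min(A):
    # Remove the smallest element of the Young tableau A, then restore
    # the tableau property by sliding the hole toward the bottom-right.
    m, n = len(A), len(A[0])
    smallest = A[0][0]
    A[0][0] = math.inf
    i = j = 0
    while True:
        down = A[i + 1][j] if i + 1 < m else math.inf
        right = A[i][j + 1] if j + 1 < n else math.inf
        if down == math.inf and right == math.inf:
            return smallest
        if down <= right:
            A[i][j], A[i + 1][j] = down, A[i][j]
            i += 1
        else:
            A[i][j], A[i][j + 1] = right, A[i][j]
            j += 1

def tableau_sort(A):
    # n^2 extractions from an n x n tableau: O(n^3) overall.
    m, n = len(A), len(A[0])
    return [extract_min(A) for _ in range(m * n)]

Y = [[1, 3, 6], [2, 5, 8], [4, 7, 9]]
print(tableau_sort(Y))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```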
7.4-2
Observe that quicksort's running time satisfies the recurrence $T(n) = \Theta(n) + T(n_1) + T(n_2)$ with $n = n_1 + n_2 + 1$, where the $\Theta(n)$ term accounts for the partition operation. The recursion tree attains its minimal height when $n_1$ and $n_2$ are both about $\frac{n}{2}$, ie. $T(n) = \Theta(n) + 2T(\frac{n}{2})$. Therefore, by the Master Theorem, we have $T(n) = \Theta(n \lg{n})$ in the best case, and hence $T(n) = \Omega(n \lg{n})$.
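The best-case recurrence can be checked numerically under the simplifying assumption that partitioning costs exactly $n$: with $T(1) = 0$, the solution at powers of two is exactly $n \lg{n}$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Best-case recurrence, assuming an exact partition cost of n.
    if n <= 1:
        return 0
    return n + 2 * T(n // 2)

for m in (1, 2, 4, 8, 16):
    n = 2 ** m
    # For powers of two, T(n) == n * lg(n) exactly.
    assert T(n) == n * m
print("T(n) = n lg n verified at powers of two")
```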
7-1
a.
When HOARE-PARTITION starts:
$x = 13$
$i = 0$
$j = 13$
$A = [13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21]$
When iteration 1 terminates:
$x = 13$
$i = 1$
$j = 11$
$A = [6, 19, 9, 5, 12, 8, 7, 4, 11, 2, 13, 21]$
When iteration 2 terminates:
$x = 13$
$i = 2$
$j = 10$
$A = [6, 2, 9, 5, 12, 8, 7, 4, 11, 19, 13, 21]$
When iteration 3 terminates:
$x = 13$
$i = 10$
$j = 9$
$A = [6, 2, 9, 5, 12, 8, 7, 4, 11, 19, 13, 21]$
Then the comparison $i < j$ evaluates to FALSE and HOARE-PARTITION returns $j = 9$.
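The trace above can be verified mechanically with a 0-indexed transcription of HOARE-PARTITION (a sketch; the book's indices $1...12$ become $0...11$, so the returned index $9$ appears as $8$):

```python
def hoare_partition(A, p, r):
    # 0-indexed transcription of CLRS HOARE-PARTITION.
    x = A[p]
    i, j = p - 1, r + 1
    while True:
        j -= 1
        while A[j] > x:          # repeat j-1 until A[j] <= x
            j -= 1
        i += 1
        while A[i] < x:          # repeat i+1 until A[i] >= x
            i += 1
        if i < j:
            A[i], A[j] = A[j], A[i]
        else:
            return j

A = [13, 19, 9, 5, 12, 8, 7, 4, 11, 2, 6, 21]
q = hoare_partition(A, 0, len(A) - 1)
print(q, A)  # q == 8 (index 9 when 1-indexed); A matches the final trace state
```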
b.
Variable $i$ starts at $p-1$ and is incremented at least once before $A[i]$ is accessed, so we never access $A$ with an index $i < p$. Similarly, variable $j$ starts at $r+1$ and is decremented at least once before $A[j]$ is accessed, so we never access $A$ with an index $j > r$. On the other hand, HOARE-PARTITION terminates as soon as $i \geq j$; in particular, if $i$ == $j$, then we get $i \leq r$ and $j \geq p$.
To complete the proof, observe (for a fuller explanation, see the Initialization and Maintenance parts in d.) that every iteration of the loop in lines 4-13 decrements $j$ at least once and ensures $x < A[k]$ for $j+1 \leq k \leq r$. Similarly, every iteration of the loop in lines 4-13 increments $i$ at least once and ensures $A[k] < x$ for $p \leq k \leq i-1$.
If $i > r$ occurred, we would have $j$ == $r$ (since $i > r \implies A[r] < x$ while $j < r \implies A[r] \geq x$, and these are mutually exclusive), which would mean the current iteration is the first one. But the assignment $x = A[p]$ before the loop starts guarantees that $i$'s scan stops no later than index $p$ in the first iteration (because $A[p] \geq x$), a contradiction. Therefore, we must have $i \leq r$.
On the other hand, if $j < p$ occurred, we would have $i$ == $p$ (since $j < p \implies A[p] > x$ while $i > p \implies A[p] \leq x$, and these are mutually exclusive), which would mean the current iteration is the first one. But the assignment $x = A[p]$ before the loop starts guarantees that $j$'s scan stops no later than index $p$ in the first iteration (because $A[p] \leq x$), a contradiction. Therefore, we must have $j \geq p$.
c.
As discussed in b., we already have $p \leq i, j \leq r$, so the only remaining part to prove is $j \neq r$.
Assume the returned value $j$ equals $r$. This implies $i \geq j$ in the final iteration (of the loop in lines 4-13), hence $A[p] < x$ and $i$ == $r$. But $A[p] < x$ means the content at index $p$ has been modified, so the current iteration (of the loop in lines 4-13) is not the first one. Since every iteration (of the loop in lines 4-13) decrements $j$ at least once, we must have $j < r$, a contradiction. This completes the proof.
d.
We use loop invariant to prove the assertion.
Loop Invariant (loop in lines 4-13):
At the start of each loop iteration, $A[k] \leq x$ for $p \leq k \leq i$ and $x \leq A[k]$ for $j \leq k \leq r$. $(*)$
Initialization:
When the first iteration starts, $(*)$ holds trivially: since $i$ == $p-1$ and $j$ == $r+1$, the ranges $A[p...i]$ and $A[j...r]$ are empty.
Maintenance:
Let the variable $i$ have value $i'$ and the variable $j$ have value $j'$ when the current iteration starts, and suppose $(*)$ holds for the values $i'$, $j'$. (Be careful to distinguish between the variables $i$, $j$ and their values.)
Consider how the variables $i$ and $j$ evolve within the iteration: $i$'s incremental scan pauses when $A[i] \geq x$, and $j$'s decremental scan pauses when $A[j] \leq x$. At that point, $A[k] \leq x$ for $p \leq k \leq i-1$ and $x \leq A[k]$ for $j+1 \leq k \leq r$ (ie. $(*)$ holds for $i-1$, $j+1$). If $i \geq j$, then HOARE-PARTITION terminates. If $i < j$, then the swap is performed, after which $(*)$ holds for the current values of $i$, $j$.
Termination:
When the loop terminates, we have $i \geq j$.
In the case $i > j$, $(*)$ holds for $i-1$, $j+1$ and the assertion follows.
In the case $i$ == $j$, $(*)$ holds for $i-1$, $j+1$; combining this with $A[i]$ == $A[j] \leq x$ completes the proof.
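The conclusions of b.-d. (the returned $j$ satisfies $p \leq j < r$, and every element of $A[p...j]$ is at most every element of $A[j+1...r]$) can also be spot-checked on random inputs; a sketch, using a 0-indexed transcription of HOARE-PARTITION:

```python
import random

def hoare_partition(A, p, r):
    # 0-indexed transcription of CLRS HOARE-PARTITION.
    x = A[p]
    i, j = p - 1, r + 1
    while True:
        j -= 1
        while A[j] > x:
            j -= 1
        i += 1
        while A[i] < x:
            i += 1
        if i < j:
            A[i], A[j] = A[j], A[i]
        else:
            return j

random.seed(0)
for _ in range(1000):
    n = random.randint(2, 20)
    A = [random.randint(0, 9) for _ in range(n)]
    j = hoare_partition(A, 0, n - 1)
    assert 0 <= j < n - 1                    # part c: p <= j < r
    assert max(A[:j + 1]) <= min(A[j + 1:])  # part d: cross-partition order
print("all random checks passed")
```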